Neural Variational Random Field Learning

Authors

  • Volodymyr Kuleshov
  • Stefano Ermon
Abstract

We propose variational bounds on the log-likelihood of an undirected probabilistic graphical model p that are parametrized by flexible approximating distributions q. These bounds are tight when q = p, are convex in the parameters of q for interesting classes of q, and may be further parametrized by an arbitrarily complex neural network. When optimized jointly over q and p, our bounds enable us to accurately track the partition function during learning.
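As an illustration of the underlying idea, here is a minimal NumPy sketch (not the paper's code) of the quantity such bounds track: the log partition function of an unnormalized model, estimated via importance samples from a tractable approximating distribution q. The densities `log_p_tilde` and `log_q` below are illustrative assumptions, not the paper's API.

```python
import numpy as np

# Hedged sketch (not the paper's implementation): estimate log Z of an
# unnormalized model p_tilde via importance samples from a tractable q,
# using log Z = log E_q[ p_tilde(x) / q(x) ].

rng = np.random.default_rng(0)

def log_p_tilde(x):
    # Illustrative unnormalized model: exp(-x^2/4), so Z = sqrt(4*pi)
    return -x**2 / 4.0

def log_q(x):
    # Approximating distribution q = N(0, 4); broader than p, so the
    # importance weights stay bounded
    return -x**2 / 8.0 - 0.5 * np.log(8.0 * np.pi)

def estimate_log_Z(n=100_000):
    x = 2.0 * rng.standard_normal(n)        # samples from q
    log_w = log_p_tilde(x) - log_q(x)       # log importance weights
    m = log_w.max()                         # log-mean-exp for stability
    return m + np.log(np.mean(np.exp(log_w - m)))

true_log_Z = 0.5 * np.log(4.0 * np.pi)      # exact: log sqrt(4*pi)
```

By Jensen's inequality, the mean of `log_w` lower-bounds `log Z`, with equality when q = p; the estimator above becomes exact in the same limit, which is the sense in which the paper's bounds are tight.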


Similar Articles

VBALD - Variational Bayesian Approximation of Log Determinants

Evaluating the log determinant of a positive definite matrix is ubiquitous in machine learning. Applications range from Gaussian processes, minimum-volume ellipsoids, metric learning, kernel learning, Bayesian neural networks, Determinantal Point Processes, and Markov random fields to partition functions of discrete graphical models. In order to avoid the canonical, yet prohibitive, Cholesk...


Superpixel Filtering for Mean Field Inference in CRFs Integrated with Convolutional Neural Networks

Low-level computer vision tasks such as per-pixel labelling have recently been approached from the perspective of representation learning. Using deep architectures, strong, characteristic features can be learnt by applying a series of operations such as convolutions, resampling and nonlinearities. These learnt representations are then used in a classifier to assign labels to pixels, yieldi...


A Fast Variational Approach for Learning Markov Random Field Language Models

Language modelling is a fundamental building block of natural language processing. However, in practice the size of the vocabulary limits the distributions applicable for this task: specifically, one has to either resort to local optimization methods, such as those used in neural language models, or work with heavily constrained distributions. In this work, we take a step towards overcoming the...


The Variational Gaussian Approximation Revisited

The variational approximation of posterior distributions by multivariate Gaussians has been much less popular in the machine learning community compared to the corresponding approximation by factorizing distributions. This is for a good reason: the Gaussian approximation is in general plagued by an O(N^2) number of variational parameters to be optimized, N being the number of random vari...


On Modern Deep Learning and Variational Inference

Bayesian modelling and variational inference are rooted in Bayesian statistics, and easily benefit from the vast literature in the field. In contrast, deep learning lacks a solid mathematical grounding. Instead, empirical developments in deep learning are often justified by metaphors, evading the unexplained principles at play. It is perhaps astonishing then that most modern deep learning model...



Publication date: 2016